Establishment of Human Performance Baseline for Image Fusion Algorithms in the LWIR and SWIR Spectra
Authors
Abstract
The performance benchmark process for discriminating between image fusion algorithms presented by Howell [3] and again by Moyer and Howell [6] establishes a performance goal for any aggregate function merging the source-band imagery under investigation. Moyer and Howell reported, in "Establishment of Human Performance Baseline for Image Fusion Algorithms in the LWIR and MWIR Spectra," that increased probability of identification, P(ID), could be realized depending upon how the source-band information was presented. In that work, human observers were asked to identify displayed targets drawn from a standard set of tracked military vehicles. The observers performed the ID task using LW and MW source-band imagery concatenated and presented side by side on a single monitor, temporally interlaced source-band imagery on a single monitor, and source-band imagery presented to each eye of the observer in parallel. The research presented in this paper explored the impact that dissimilar source-band information had on the observer's ability to identify targets without the aid of image fusion algorithms; the spectral bands under consideration for this effort were the LW and SW bands. It was hypothesized that display techniques using dissimilar source spectral-band information would allow the observers to choose the portions of the source-band images they needed to perform the visual discrimination task of identification, yielding performance above that achievable with the fused superposition images. After comparing the observers' P(ID) using the superposition-fused images to the P(ID) obtained with these display techniques, it was determined that performance with the superposition-fused images fell well below the observers' benchmark source-band performance. Allowing the observer to view source-band imagery in different display formats without the aid of image fusion algorithms, which we refer to as "self-fusion," allows the experimenter to establish an absolute benchmark for discriminating between image fusion algorithm performances. It was our intent to mirror the analysis performed by Moyer and Howell [6] using LW and SW imagery; however, the experiment in which LW and SW source-band imagery was presented to each eye of the observer in parallel could not be completed due to binocular rivalry [1] caused by the competing information presented independently to each eye. The remaining experiments performed with the LW and MW imagery were repeated in this work using LW and SW imagery. This paper is outlined as follows: a background section describes some common approaches to image fusion along with some common image quality metrics and their shortcomings in predicting human task performance.
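To make the display conditions concrete, the following sketch (in Python with NumPy, using hypothetical function names and assuming co-registered source frames normalized to [0, 1]) contrasts a simple weighted-superposition fusion with two of the "self-fusion" presentations discussed above, side-by-side concatenation and temporal interlacing; it is an illustration of the concepts only, not the experimental software used in this study.

```python
import numpy as np

def superposition_fuse(lwir, swir, weight=0.5):
    """Pixel-wise weighted superposition of co-registered LWIR and SWIR frames.

    Both inputs are assumed to be 2-D arrays of the same shape, scaled to [0, 1].
    """
    return np.clip(weight * lwir + (1.0 - weight) * swir, 0.0, 1.0)

def side_by_side(lwir, swir):
    """Concatenate the two source bands horizontally so the observer can view
    both bands at once and choose which one to draw on for the ID task."""
    return np.hstack((lwir, swir))

def temporal_interlace(lwir, swir, n_frames=60):
    """Alternate the two source bands frame by frame for display on a single
    monitor (temporal interlacing)."""
    return [lwir if i % 2 == 0 else swir for i in range(n_frames)]
```

In the dichoptic condition, the unfused LWIR and SWIR frames would instead be routed separately to the observer's left and right eyes; this is the presentation mode that binocular rivalry prevented from being completed in the LW/SW case.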
Similar resources
Image Fusion & Mining Tools for a COTS Environment
We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed ...
Fusion of Multi-Sensor Imagery for Night Vision: Color Visualization, Target Learning and Search
We present methods and result...
Megacollect 2004: Hyperspectral Collection Experiment of Terrestrial Targets and Backgrounds of the RIT Megascene
This paper describes a collaborative collection campaign to spectrally image and measure a well characterized scene for hyperspectral algorithm development and validation/verification of scene simulation models (DIRSIG). The RIT Megascene, located in the northeast corner of Monroe County near Rochester, New York, has been modeled and characterized under the DIRSIG environment and has been simul...
Multi-Focus Image Fusion in DCT Domain using Variance and Energy of Laplacian and Correlation Coefficient for Visual Sensor Networks
The purpose of multi-focus image fusion is to gather the essential information and the focused parts from the input multi-focus images into a single image. These multi-focus images are captured with different depths of focus of the cameras. Many multi-focus image fusion techniques have been introduced that consider the focus measurement in the spatial domain. However, the multi-focus image ...
Comparative Evaluation of Image Fusion Methods for Hyperspectral and Panchromatic Data Fusion in Agricultural and Urban Areas
Nowadays remote sensing plays a key role in earth science studies due to several advantages, including data collection at very low cost and in little time over very large areas. Meanwhile, the use of hyperspectral data is of great importance because of its high spectral resolution. Owing to some limitations of hyperspectral imaging technology, it suffers from a reduction in the spatial ...
Journal: J. Adv. Inf. Fusion
Volume: 8, Issue: -
Pages: -
Publication year: 2013